

Section: New Results

Understanding embodied neural systems

Participants: Dominique Martinez, Carlos Carvajal-Gallardo, Georgios Detorakis.

Biophysical modeling and embodied olfaction

Our understanding of the computations that take place in the human brain is limited by the extreme complexity of the cortex and by the difficulty, for practical and ethical reasons, of experimentally recording neural activities. The Human Genome Project was preceded by the sequencing of smaller but complete genomes. Similarly, it is likely that future breakthroughs in neuroscience will come from the study of smaller but complete nervous systems, such as the insect brain or the rat olfactory bulb. These relatively small nervous systems exhibit general properties that are also present in humans, such as neural synchronization and network oscillations. Our goal has therefore been to understand the role of these phenomena by combining biophysical modeling and experimental recordings, before applying this knowledge to humans. Over the last year, we have extended our neuronal model of the insect olfactory system. This model reproduces and explains the stereotyped multiphasic firing pattern observed in pheromone-sensitive antennal lobe neurons [10].
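As an illustration of the kind of single-neuron dynamics involved (a minimal sketch only, not the published antennal-lobe model of [10], with arbitrary parameter values), the following Python snippet integrates a leaky integrate-and-fire neuron with a slow adaptation current: a brief pheromone pulse evokes a burst of spikes followed by a prolonged suppression, a simple two-phase analogue of the multiphasic patterns discussed above.

```python
import numpy as np

# Minimal sketch, not the published antennal-lobe model of [10]: a leaky
# integrate-and-fire neuron with a slow adaptation current. A brief
# "pheromone" pulse evokes a burst of spikes followed by a prolonged
# suppression of activity. All parameter values are arbitrary.

dt, T = 0.1, 600.0                              # time step and duration (ms)
t = np.arange(0.0, T, dt)
stim = np.where((t > 100.0) & (t < 150.0), 1.5, 0.1)   # pulse on a weak baseline

tau_m, tau_a = 10.0, 150.0                      # membrane / adaptation time constants (ms)
v_rest, v_thr, v_reset = -65.0, -50.0, -70.0    # potentials (mV)
b = 2.0                                         # adaptation increment per spike

v, a, spikes = v_rest, 0.0, []
for i in range(len(t)):
    v += dt * (-(v - v_rest) + 20.0 * stim[i] - a) / tau_m
    a += dt * (-a / tau_a)
    if v >= v_thr:                              # threshold crossing: emit a spike
        spikes.append(t[i])
        v = v_reset
        a += b                                  # slow current builds up with firing

print(f"{len(spikes)} spikes" + (f", last at {spikes[-1]:.1f} ms" if spikes else ""))
```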

Using this model in robotic experiments, with insect antennae as olfactory sensors, we related these multiphasic responses to action selection. The efficiency of the model for olfactory searches was demonstrated by driving the robot toward a pheromone source. Two different classes of strategies are possible for olfactory searches: those based on a spatial map, e.g. Infotaxis, and those in which the casting-and-zigzagging behaviour observed in insects is purely reactive, requiring no internal memory, representation of the environment, or inference [15]. Our goal was to compare these two approaches by implementing infotactic and reactive search strategies in a robot and testing them under real environmental conditions. We previously showed that robot Infotaxis produces trajectories featuring zigzagging and casting behaviours similar to those of moths, is robust, and allows for rapid and reliable searches. We have implemented infotactic and reactive search strategies in a cyborg using the antennae of a tethered moth as sensors, since no artificial sensor for pheromone molecules is presently available [10].
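The contrast between the two classes of strategies can be made concrete with a short sketch. The Python snippet below implements a purely reactive moth-like controller (surge upwind when pheromone is detected, cast crosswind with growing amplitude when it is lost); an infotactic controller would instead maintain a probability map of the source location and choose the move that maximizes the expected reduction of its entropy. The random detector standing in for the antenna signal, the step sizes, and the casting schedule are all assumptions for illustration, not the controller used on the robot or the cyborg.

```python
import random

# Sketch of a purely reactive, memory-less search strategy (not the controller
# of the robot or cyborg experiments): surge upwind while pheromone is
# detected, cast crosswind with increasing amplitude once the plume is lost.
# Wind is assumed to blow along -x, so "upwind" is the +x direction.

def reactive_step(detected, state):
    """Return a (dx, dy) displacement and the updated casting state."""
    if detected:
        state["lost_for"] = 0                      # plume reacquired: stop casting
        return (1.0, 0.0), state                   # surge straight upwind
    state["lost_for"] += 1
    amplitude = min(0.2 * state["lost_for"], 3.0)  # widen the casts over time
    state["side"] *= -1                            # zigzag: alternate crosswind side
    return (0.2, state["side"] * amplitude), state

# Toy usage: a random binary detection stands in for the antenna signal.
state, pos = {"lost_for": 0, "side": 1}, [0.0, 0.0]
for _ in range(50):
    detected = random.random() < 0.3               # placeholder for the real sensor
    (dx, dy), state = reactive_step(detected, state)
    pos[0] += dx
    pos[1] += dy
print("final position:", pos)
```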

Somatosensory cortex

In a joint work with the Mnemosyne team, we have investigated the formation and maintenance of ordered topographic maps in the primary somatosensory cortex, as well as the reorganization of representations after sensory deprivation or cortical lesion. We consider both the (postnatal) critical period, during which representations are shaped, and the post-critical period, during which they are maintained and possibly reorganized. We hypothesize that feed-forward thalamocortical connections are an adequate site of plasticity, whereas cortico-cortical connections are believed to drive a competitive mechanism that is critical for learning. We model a small skin patch located on the distal phalangeal surface of a digit as a set of 256 Merkel ending complexes (MECs) that feed a computational model of the primary somatosensory cortex (area 3b). This model is a two-dimensional neural field in which spatially localized solutions (a.k.a. bumps) drive cortical plasticity through a Hebbian-like learning rule. Simulations explain the initial formation of ordered representations following repetitive and random stimulation of the skin patch. Skin lesions as well as cortical lesions are also studied, and the results confirm that representations can be reorganized using the same learning rule, depending on the type of lesion. For severe lesions, the model suggests that cortico-cortical connections may play an important role in complete recovery [11], [19], [7].
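The learning loop can be summarized by the schematic sketch below, in which the neural-field dynamics are replaced by a Gaussian activity bump centred on the most activated cortical unit; the receptor layout, kernel widths, and learning rate are arbitrary choices for illustration, not the parameters of the published model. A skin lesion can then be simulated by silencing a subset of receptors and letting the same update rule run further, which is the kind of reorganization experiment described above.

```python
import numpy as np

# Schematic sketch of the plasticity loop described above, not the published
# neural-field model of [11], [19], [7]: feed-forward weights from a patch of
# skin receptors onto a 2-D cortical sheet are updated with a Hebbian-like
# rule gated by a localized cortical activity profile (a "bump"). The bump is
# approximated here by a Gaussian centred on the most activated cortical unit,
# standing in for the stationary solution of the neural field.

rng = np.random.default_rng(0)
rside, cside = 16, 32                        # 16x16 = 256 receptors, 32x32 cortical units
ry, rx = np.mgrid[0:rside, 0:rside]
rpos = np.stack([ry.ravel(), rx.ravel()], axis=1).astype(float)
cy, cx = np.mgrid[0:cside, 0:cside]
cpos = np.stack([cy.ravel(), cx.ravel()], axis=1).astype(float)

W = rng.uniform(0.0, 1.0, (cside * cside, rside * rside))   # thalamocortical weights

def gaussian(pos, center, sigma):
    d2 = ((pos - center) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

eta = 0.05                                    # learning rate (arbitrary)
for _ in range(5000):                         # repeated random stimulation of the skin patch
    s = gaussian(rpos, rpos[rng.integers(rside * rside)], sigma=1.5)  # localized touch
    drive = W @ s                             # cortical drive via feed-forward weights
    g = gaussian(cpos, cpos[int(drive.argmax())], sigma=2.0)          # cortical bump
    W += eta * g[:, None] * (s[None, :] - W)  # Hebbian-like, bump-gated update
```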

K-cells in visuomotor tasks

In another joint work with the Mnemosyne team, we have explored the role of the thalamus in visuomotor tasks involving non-standard ganglion cells. Such cells in the retina have specific loci of projection in the visuomotor system, particularly in the thalamus and the superior colliculus. In the thalamus, they feed the konio pathway of the lateral geniculate nucleus (LGN). Exploring the specificities of this pathway, we found that it could be associated with the matrix system of thalamo-cortical projections, which is known to allow diffuse patterns of connectivity and to play a major role in the synchronization of cortical regions by the thalamus. An early model led to the design of the corresponding information flows in the thalamo-cortical system, which we expanded, in the framework of the Keops project, and applied to real visuomotor tasks [13].

We proposed to implement the computational principles raised by the study of retinal K-cells using a variational specification of the visual front-end, with an important consequence: in such a framework, the ganglion cells (GC) are not considered individually but as a network, yielding a mesoscopic view of the retinal process. Given natural image sequences, fast event-detection properties appear to be exhibited by the mesoscopic, collective non-standard behavior of a subclass of the so-called dorsal and ventral konio-cells (K-cells), which correspond to a specific retinal output. We consider this visual event-detection mechanism to be based on image segmentation and the recognition of specific natural statistics, including temporal pattern recognition, yielding fast region categorization. We discussed how such sophisticated functionalities could be implemented in biological tissue as a single generic two-layered non-linear filtering mechanism with feedback. We used computer-vision methods to propose an effective link between the observed functions and their possible implementation in the retinal network. The resulting computational architecture is a two-layer network with a non-separable local spatio-temporal convolution as input and recurrent connections performing non-linear diffusion before prototype-based visual event detection [17].
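To make the architecture concrete, the sketch below chains the three stages on a toy image sequence: a non-separable spatio-temporal filter (temporal differencing combined with a spatial centre-surround difference), a few iterations of recurrent non-linear diffusion, and a nearest-prototype decision on a coarse descriptor of the resulting activity map. The kernels, the diffusion non-linearity, and the random prototypes are assumptions for illustration and do not reproduce the filters of [17].

```python
import numpy as np

# Schematic sketch of the two-layer architecture outlined above, with
# arbitrary kernels and thresholds (not the implementation of [17]):
# layer 1 applies a non-separable spatio-temporal filter to an image
# sequence, layer 2 runs a few steps of recurrent non-linear diffusion,
# and events are detected by nearest-prototype matching on the result.

rng = np.random.default_rng(1)
frames = rng.random((5, 64, 64))             # toy image sequence (T, H, W)

# Layer 1: non-separable spatio-temporal convolution (temporal derivative
# combined with a spatial centre-surround difference).
dt = frames[1:] - frames[:-1]                # temporal differencing
def center_surround(img, k=2):
    pad = np.pad(img, k, mode="edge")
    surround = sum(np.roll(np.roll(pad, dy, 0), dx, 1)[k:-k, k:-k]
                   for dy in (-k, 0, k) for dx in (-k, 0, k)) / 9.0
    return img - surround
x = np.stack([center_surround(f) for f in dt]).mean(axis=0)

# Layer 2: recurrent non-linear diffusion (smoothing damped where |x| is large).
for _ in range(10):
    lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
           + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
    x = x + 0.2 * lap / (1.0 + x ** 2)

# Prototype-based event detection on a coarse descriptor of the activity map.
descriptor = x.reshape(8, 8, 8, 8).mean(axis=(1, 3)).ravel()   # 8x8 block summary
prototypes = rng.random((3, descriptor.size))                  # assumed learned offline
event = int(np.argmin(((prototypes - descriptor) ** 2).sum(axis=1)))
print("detected event category:", event)
```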